Robust Temporal Smoothness in Multi-Task Learning


Abstract

Multi-task learning models based on the temporal smoothness assumption, in which each time point of a sequence corresponds to a prediction task, assume that adjacent tasks are similar to each other. However, the effect of outliers is not taken into account. In this paper, we show that even a single outlier can destroy the performance of the entire model. To solve this problem, we propose two Robust Temporal Smoothness (RoTS) frameworks. Compared with existing models of task relations, our methods capture temporal smoothness information while also identifying outlier tasks, without increasing computational complexity. Detailed theoretical analyses are presented to evaluate our methods. Experimental results on synthetic and real-life datasets demonstrate their effectiveness. We also discuss several potential applications and extensions of RoTS.
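The temporal smoothness assumption the abstract refers to is commonly encoded as a penalty on differences between adjacent tasks' weight vectors. The sketch below is a minimal, hypothetical illustration of that generic idea and of why one outlier task dominates a squared penalty; it is not the paper's RoTS formulation, and all names are illustrative.

```python
import numpy as np

def temporal_smoothness_penalty(W):
    """Squared-l2 smoothness: sum over t of ||w_t - w_{t-1}||^2.
    Each column of W is the weight vector of one time point's task."""
    diffs = W[:, 1:] - W[:, :-1]  # differences between adjacent tasks
    return float(np.sum(diffs ** 2))

def robust_smoothness_penalty(W):
    """Unsquared (l2,1-style) smoothness: sum over t of ||w_t - w_{t-1}||_2.
    Growing only linearly in each difference, this penalty is far less
    dominated by a single aberrant time point."""
    diffs = W[:, 1:] - W[:, :-1]
    return float(np.sum(np.linalg.norm(diffs, axis=0)))

# Ten near-identical tasks plus one outlier task: the squared penalty
# blows up quadratically with the outlier's magnitude, while the
# unsquared penalty grows only linearly.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(5, 10))
W[:, 4] += 100.0  # one outlier task
```

Under this toy setup, the squared penalty is several orders of magnitude larger than the unsquared one, which mirrors the abstract's claim that a single outlier can destroy a model built purely on squared temporal smoothness.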


Related Articles

Online Multi-Task Gradient Temporal-Difference Learning

Reinforcement learning (RL) is an essential tool in designing autonomous systems, yet RL agents often require extensive experience to achieve optimal behavior. This problem is compounded when an RL agent is required to learn policies for different tasks within the same environment or across multiple environments. In such situations, learning task models jointly rather than independently can sig...

Learning Multi-Level Task Groups in Multi-Task Learning

In multi-task learning (MTL), multiple related tasks are learned jointly by sharing information across them. Many MTL algorithms have been proposed to learn the underlying task groups. However, those methods are limited to learn the task groups at only a single level, which may be not sufficient to model the complex structure among tasks in many real-world applications. In this paper, we propos...

Multi-Objective Multi-Task Learning

This dissertation presents multi-objective multi-task learning, a new learning framework. Given a fixed sequence of tasks, the learned hypothesis space must minimize multiple objectives. Since these objectives are often in conflict, we cannot find a single best solution, so we analyze a set of solutions. We first propose and analyze a new learning principle, empirically efficient learning. From...

Multi-Task Multi-Sample Learning

In the exemplar SVM (E-SVM) approach of Malisiewicz et al., ICCV 2011, an ensemble of SVMs is learnt, with each SVM trained independently using only a single positive sample and all negative samples for the class. In this paper we develop a multi-sample learning (MSL) model which enables joint regularization of the E-SVMs without any additional cost over the original ensemble learning. The adva...

Learning High-Order Task Relationships in Multi-Task Learning

Multi-task learning is a way of bringing inductive transfer studied in human learning to the machine learning community. A central issue in multi-task learning is to model the relationships between tasks appropriately and exploit them to aid the simultaneous learning of multiple tasks effectively. While some recent methods model and learn the task relationships from data automatically, only pai...


Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i9.26351